With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), in which LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future directions of ICL. We first present a formal definition of ICL and clarify its relation to related studies. We then organize and discuss advanced ICL techniques, including training strategies, prompting strategies, and more. Finally, we present the challenges of ICL and suggest potential directions for further research. We hope our work encourages more research on uncovering how ICL works and on improving ICL in the future.
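As a concrete illustration of the paradigm, ICL reduces to prompt construction: the frozen model receives a few demonstration pairs followed by a query and predicts by continuing the text, with no parameter updates. A minimal sketch (the template and sentiment demonstrations below are illustrative, not from any particular paper):

```python
def build_icl_prompt(demonstrations, query, template="Input: {x}\nLabel: {y}"):
    """Concatenate a few labeled demonstrations with the unlabeled query.

    The frozen LLM then predicts the label by continuing the text;
    no gradient update is involved.
    """
    demo_block = "\n\n".join(
        template.format(x=x, y=y) for x, y in demonstrations
    )
    query_block = template.format(x=query, y="").rstrip()
    return demo_block + "\n\n" + query_block

# Hypothetical sentiment-classification demonstrations.
demos = [
    ("The movie was fantastic!", "positive"),
    ("A dull, lifeless plot.", "negative"),
]
prompt = build_icl_prompt(demos, "An absolute joy to watch.")
```

The resulting string ends with an open `Label:` slot that the model is expected to fill in.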
Online media data, in the form of images and videos, have become mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, have opened the door to producing perceptually convincing images and videos at low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This has motivated growing research interest in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery can be divided into image tampering and Deepfake techniques. The former typically moves or erases visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of current media tampering detection approaches and discuss the challenges and trends in this field for future research.
Depth cues are known to be useful for visual perception. However, direct measurement of depth is often impractical. Fortunately, modern learning-based methods offer promising depth maps by inference in the wild. In this work, we adapt such depth inference models for object segmentation using the objects' ``pop-out'' prior in 3D. The ``pop-out'' prior is a simple compositional prior that assumes objects reside on the background surface. Such a compositional prior allows us to reason about objects in 3D space. More specifically, we adapt the inferred depth maps such that objects can be localized using 3D information alone. This separation, however, requires knowledge of the contact surface, which we learn using the weak supervision of the segmentation mask. Our intermediate representation of the contact surface, and thereby our reasoning about objects purely in 3D, allows us to better transfer depth knowledge into semantics. The proposed adaptation method uses only the depth model, without needing the source data used for training, making the learning process efficient and practical. Our experiments on eight datasets across two challenging tasks, namely camouflaged object detection and salient object detection, consistently demonstrate the benefit of our method in terms of both performance and generalizability.
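The paper's adaptation is learned, but the underlying compositional idea — objects residing on a background surface can be localized from depth alone — can be illustrated with a simplified numpy sketch: fit a planar background surface to the depth map and threshold the residual where the scene "pops out" (the plane fit and the fixed threshold are our simplifications, not the authors' method):

```python
import numpy as np

def pop_out_mask(depth, thresh=0.5):
    """Toy version of the 'pop-out' prior: fit a planar background
    surface to the depth map by least squares, then mark pixels whose
    depth deviates toward the camera (i.e., pops out) as object.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)], axis=1)
    coef, *_ = np.linalg.lstsq(A, depth.ravel(), rcond=None)
    plane = (A @ coef).reshape(h, w)
    return (plane - depth) > thresh  # object is closer than the background

# Synthetic example: a tilted ground plane with a box sticking out.
h, w = 32, 32
ys, xs = np.mgrid[0:h, 0:w]
depth = 10.0 + 0.05 * ys            # background surface
depth[12:20, 12:20] -= 2.0          # object closer to the camera
mask = pop_out_mask(depth)
```

In the actual method, the contact surface is learned under weak supervision rather than assumed planar.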
Visual anomaly detection plays a crucial role not only in manufacturing inspection, which finds product defects during manufacturing processes, but also in maintenance inspection, which keeps equipment in optimal working condition, particularly outdoors. Due to the scarcity of defective samples, unsupervised anomaly detection has attracted great attention in recent years. However, existing datasets for unsupervised anomaly detection are biased towards manufacturing inspection and do not consider maintenance inspection, which is usually conducted in uncontrolled outdoor environments with varying camera viewpoints, messy backgrounds, and degradation of object surfaces after long-term operation. We focus on outdoor maintenance inspection and contribute a comprehensive Maintenance Inspection Anomaly Detection (MIAD) dataset, which contains more than 100K high-resolution color images covering various outdoor industrial scenarios. The dataset is generated by 3D graphics software and covers both surface and logical anomalies with pixel-precise ground truth. We conduct extensive evaluations of representative unsupervised anomaly detection algorithms, and we expect MIAD and the corresponding experimental results to inspire the research community in outdoor unsupervised anomaly detection tasks. Worthwhile related future work can be spawned from our new dataset.
Automatic speech recognition (ASR) can be improved by multi-task learning (MTL) with domain-enhancing or domain-adversarial training, two opposite objectives that aim to increase or decrease domain variance towards domain-aware or domain-agnostic ASR, respectively. In this work, we study how to best apply these two opposite objectives with speaker labels to improve conformer-based ASR. We also propose a novel adaptive gradient reversal layer for stable and effective adversarial training without tuning effort. Detailed analysis and experimental verification are conducted to show the optimal positions in the ASR neural network (NN) at which to apply speaker enhancing and adversarial training. We also explore their combination for further improvement, achieving the same performance as i-vectors plus adversarial training. Our best speaker-based MTL achieves a 7\% relative improvement on the Switchboard Hub5'00 set. We also investigate the effect of such speaker-based MTL with respect to cleaner datasets and weaker ASR NNs.
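The standard (non-adaptive) gradient reversal layer at the heart of adversarial training is conceptually simple: identity in the forward pass, sign-flipped and scaled gradient in the backward pass. A minimal numpy sketch of this standard GRL (the adaptive scaling proposed in the paper is not reproduced here):

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer: identity forward, -lambda * grad backward.

    Placed between the shared encoder and the speaker classifier, it makes
    the encoder *maximize* the speaker loss while the classifier minimizes
    it, pushing the shared features toward speaker-agnostic ASR.
    """
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                      # identity in the forward pass

    def backward(self, grad_out):
        return -self.lam * grad_out   # flip and scale the gradient

grl = GradReverse(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)
g = grl.backward(np.ones_like(x))
```

Removing the reversal (or using a positive scale) recovers the opposite, speaker-enhancing objective discussed in the abstract.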
Self-training has shown great potential in semi-supervised learning. Its core idea is to use a model learned on labeled data to generate pseudo-labels for unlabeled samples and then teach itself. To obtain valid supervision, mainstream approaches typically employ a momentum teacher for pseudo-label prediction, yet suffer from the confirmation bias problem, where incorrect predictions may provide wrong supervision signals and accumulate during training. The main cause of this drawback is that the prevailing self-training framework guides the current state with previous knowledge, since the teacher is updated only with the past student. To alleviate this problem, we propose a novel self-training strategy that allows the model to learn from the future. Concretely, at each training step, we first virtually optimize the student (i.e., caching the gradients without applying them to the model weights), then update the teacher with the virtual future student, and finally ask the teacher to produce pseudo-labels for the current student as guidance. In this way, we manage to improve the quality of the pseudo-labels and thus boost performance. We also develop two variants of our future-self-training (FST) framework by peering into the future deeply (FST-D) and widely (FST-W). Taking unsupervised domain-adaptive semantic segmentation and semi-supervised semantic segmentation as instances, we experimentally demonstrate the effectiveness and superiority of our approach under a wide range of settings. The code will be made publicly available.
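The core teacher update can be sketched in a few lines of numpy (the parameter names, EMA form, and the simplification of committing the cached step directly are our assumptions; the paper's FST-D/FST-W variants additionally look several steps ahead):

```python
import numpy as np

def fst_step(student, teacher, grad, lr=0.1, momentum=0.99):
    """One simplified future-self-training step.

    1. Virtually optimize the student: the gradient step is computed
       but not yet applied to the real student weights.
    2. Update the teacher as an EMA of the *virtual future* student,
       so the teacher is ahead of, not behind, the current student.
    3. This future-informed teacher then produces pseudo-labels to
       supervise the current student, which finally commits its update.
    """
    virtual_student = student - lr * grad            # cached, not applied
    teacher = momentum * teacher + (1 - momentum) * virtual_student
    student = virtual_student                        # now commit the step
    return student, teacher

student = np.array([1.0, 2.0])
teacher = np.array([1.0, 2.0])
grad = np.array([0.5, -0.5])
student, teacher = fst_step(student, teacher, grad)
```

Contrast with conventional self-training, where the EMA uses the *past* student and the teacher therefore lags behind.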
Although exploiting the transferability of adversarial examples can achieve high attack success rates for non-targeted attacks, it does not work well in targeted attacks, since the gradient directions from a source image to a target class usually differ across DNNs. To improve the transferability of targeted attacks, recent studies align the features of generated adversarial examples with the feature distribution of the target class learned from an auxiliary network or a generative adversarial network. However, these works assume that the training dataset is available and require a large amount of time to train networks, making them hard to apply in the real world. In this paper, we revisit adversarial examples with targeted transferability from the perspective of universality and find that highly universal adversarial perturbations tend to be more transferable. Based on this observation, we propose the Locality of Images (LI) attack to improve targeted transferability. Specifically, instead of using only a classification loss, LI introduces a feature similarity loss between the adversarially perturbed image and randomly cropped versions of it, which makes the features of the adversarial perturbation more dominant than those of the benign image, thus improving targeted transferability. By incorporating the locality of images into optimizing perturbations, the LI attack emphasizes that targeted perturbations should be relevant to diverse input patterns, even local image patches. Extensive experiments show that LI can achieve high success rates for transfer-based targeted attacks. When attacking the ImageNet-compatible dataset, LI yields a 12\% improvement over existing state-of-the-art methods.
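The extra loss term can be sketched as an average feature similarity between the perturbed image and random crops of itself (the mean/std feature extractor below is a stand-in for illustration; LI uses internal features of the attacked DNN):

```python
import numpy as np

rng = np.random.default_rng(0)

def features(img):
    """Stand-in feature extractor: global pixel statistics."""
    return np.array([img.mean(), img.std()])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def li_similarity_loss(adv_img, n_crops=4, crop=16):
    """Negated mean feature similarity between the adversarial image
    and random crops of itself; minimizing it (i.e., maximizing the
    similarity) makes the perturbation's features dominant across
    local patches -- the LI intuition."""
    h, w = adv_img.shape
    f_full = features(adv_img)
    sims = []
    for _ in range(n_crops):
        y = rng.integers(0, h - crop + 1)
        x = rng.integers(0, w - crop + 1)
        patch = adv_img[y:y + crop, x:x + crop]
        sims.append(cosine(f_full, features(patch)))
    return -float(np.mean(sims))  # a loss to minimize

img = rng.standard_normal((32, 32))
loss = li_similarity_loss(img)
```

In the full attack, this term is combined with the targeted classification loss when optimizing the perturbation.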
Typical text spotters follow a two-stage spotting strategy: first detect the precise boundaries of text instances, then perform text recognition within the located text regions. Although such a strategy has achieved substantial progress, it suffers from two fundamental limitations. 1) The performance of text recognition depends heavily on the precision of text detection, leading to potential error propagation from detection to recognition. 2) The RoI cropping that bridges detection and recognition introduces background noise and causes information loss when pooling or interpolating from feature maps. In this work, we propose the Single-shot Self-Reliant Scene Text Spotter (SRSTS), which circumvents these limitations by decoupling recognition from detection. Specifically, we conduct text detection and recognition in parallel and bridge them through shared positive anchor points. Consequently, our method is able to recognize text instances correctly even when the exact text boundaries are challenging to detect. Moreover, our method substantially reduces the annotation cost for text detection. Extensive experiments on both regular-shaped and arbitrarily-shaped text benchmarks demonstrate that our SRSTS compares favorably to previous state-of-the-art spotters in terms of both accuracy and efficiency.
This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images that can be trained without gaze angle and head pose annotations. Common gaze correction methods usually require annotating the training data with precise gaze and head pose information. Solving this problem with an unsupervised method remains open, especially for high-resolution face images in the wild, which are not easy to annotate with gaze and head pose labels. To address this issue, we first create two new portrait datasets: CelebGaze and the high-resolution CelebHQGaze. Second, we formulate the gaze correction task as an image inpainting problem, addressed with a Gaze Correction Module (GCM) and a Gaze Animation Module (GAM). Moreover, we propose an unsupervised training strategy, i.e., Synthesis-As-Training, to learn the correlation between the eye region features and the gaze angle. As a result, we can use the learned latent space for gaze animation. Furthermore, to alleviate the memory and computational costs in both the training and inference stages, we propose a Coarse-to-Fine Module (CFM) integrated with GCM and GAM. Extensive experiments validate the effectiveness of our method for gaze correction and gaze animation tasks on both low- and high-resolution face datasets in the wild and demonstrate the superiority of our method over the state of the art. Code is available at https://github.com/zhangqianhui/GazeAnimationV2.
Most existing vision-language pre-training methods focus on understanding tasks and use BERT-like objectives (masked language modeling and image-text matching) during pre-training. Although they perform well on many understanding downstream tasks, e.g., visual question answering, image-text retrieval, and visual entailment, they lack the ability of generation. To tackle this problem, we propose Unified multimodal pre-training for Vision-Language understanding and generation (UniVL). The proposed UniVL is capable of handling both understanding tasks and generation tasks. We augment existing pre-training paradigms, which use only random masks, with causal masks, i.e., triangular masks that mask out future tokens, such that the pre-trained model has autoregressive generation ability by design. We formulate several previous understanding tasks as text generation tasks and propose to use prompt-based methods for fine-tuning on different downstream tasks. Our experiments show that there is a trade-off between understanding tasks and generation tasks when using the same model, and that a feasible way to improve both is to use more data. Our UniVL framework attains performance comparable to recent vision-language pre-training methods on both understanding tasks and generation tasks. Moreover, we demonstrate that prompt-based fine-tuning is more data-efficient, outperforming discriminative methods in few-shot scenarios.
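The causal mask that grants autoregressive ability is just a lower-triangular matrix over token positions; a minimal sketch:

```python
import numpy as np

def causal_mask(seq_len):
    """Triangular attention mask: position i may attend only to
    positions j <= i, so future tokens are masked out by design."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

mask = causal_mask(4)
# Row i is True up to and including column i, False afterwards.
```

Replacing the bidirectional (all-True) mask of BERT-style pre-training with this triangular one is what lets the same model be trained for generation.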